
    Throughput Maximization in Multiprocessor Speed-Scaling

    We are given a set of $n$ jobs that have to be executed on a set of $m$ speed-scalable machines that can vary their speeds dynamically using the energy model introduced in [Yao et al., FOCS'95]. Every job $j$ is characterized by its release date $r_j$, its deadline $d_j$, its processing volume $p_{i,j}$ if $j$ is executed on machine $i$, and its weight $w_j$. We are also given a budget of energy $E$ and our objective is to maximize the weighted throughput, i.e. the total weight of jobs that are completed between their respective release dates and deadlines. We propose a polynomial-time approximation algorithm where the preemption of the jobs is allowed but not their migration. Our algorithm uses a primal-dual approach on a linearized version of a convex program with linear constraints. Furthermore, we present two optimal algorithms for the non-preemptive case where the number of machines is bounded by a fixed constant. More specifically, we consider: (a) the case of identical processing volumes, i.e. $p_{i,j}=p$ for every $i$ and $j$, for which we present a polynomial-time algorithm for the unweighted version, which becomes a pseudopolynomial-time algorithm for the weighted throughput version, and (b) the case of agreeable instances, i.e. instances for which $r_i \le r_j$ if and only if $d_i \le d_j$, for which we present a pseudopolynomial-time algorithm. Both algorithms are based on a discretization of the problem and the use of dynamic programming.
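
    As a point of reference for the energy model cited above, the following is a minimal sketch (not taken from the paper) of the convexity fact behind speed scaling: running at speed $s$ consumes power $s^\alpha$, so a fixed volume processed within a fixed window is cheapest at constant speed. The exponent $\alpha = 3$ and the example numbers are illustrative assumptions.

    ```python
    # Minimal sketch of the speed-scaling energy model (illustration only).
    # Assumption: power consumption at speed s is s**alpha with alpha > 1.

    def energy_constant_speed(volume: float, window: float, alpha: float = 3.0) -> float:
        """Energy to process `volume` units of work within `window` time units
        at constant speed s = volume / window: E = s**alpha * window."""
        speed = volume / window
        return speed ** alpha * window

    def energy_two_phase(volume: float, window: float, split: float, alpha: float = 3.0) -> float:
        """Energy when the same volume is processed unevenly: a `split` fraction
        of the volume in the first half of the window, the rest in the second half."""
        half = window / 2.0
        return (energy_constant_speed(split * volume, half, alpha)
                + energy_constant_speed((1.0 - split) * volume, half, alpha))

    if __name__ == "__main__":
        # By convexity of s**alpha, the constant-speed schedule is never more expensive.
        print(energy_constant_speed(10.0, 4.0))        # 62.5
        print(energy_two_phase(10.0, 4.0, split=0.7))  # 92.5, uneven split costs more
    ```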

    Temperature Aware Online Algorithms for Minimizing Flow Time


    Models and algorithms for energy-efficient scheduling with immediate start of jobs

    We study a scheduling model with speed scaling for machines and the immediate start requirement for jobs. Speed scaling improves the system performance but incurs an energy cost. The immediate start condition implies that each job must be started exactly at its release time; such a condition is typical for modern Cloud computing systems with abundant resources. We consider two cost functions, one that represents the quality of service and the other that corresponds to the cost of running. We demonstrate that the basic scheduling model to minimize the aggregated cost function with n jobs is solvable in O(n log n) time in the single-machine case and in O(n²m) time in the case of m parallel machines. We also address additional features, e.g., the cost of job rejection or the cost of initiating a machine. In the case of a single machine, we present algorithms for minimizing one of the cost functions subject to an upper bound on the value of the other, as well as for finding a Pareto-optimal solution.
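
    The sketch below only illustrates how two such cost functions trade off for a single immediate-start job, assuming a flow-time-style quality-of-service cost w·t and the s^α energy model; the weight w and exponent α are illustrative assumptions, and this is not the O(n log n) algorithm of the paper.

    ```python
    # Minimal sketch (not from the paper): a single immediate-start job of volume p,
    # run for duration t at constant speed p/t, is assumed to cost
    #   cost(t) = w * t                (quality-of-service / flow-time cost, assumed)
    #           + (p / t)**alpha * t   (speed-scaling energy, power = speed**alpha)
    # Setting the derivative to zero gives the closed-form optimal duration below.

    def optimal_duration(p: float, w: float, alpha: float = 3.0) -> float:
        """Duration minimizing w*t + p**alpha * t**(1 - alpha)."""
        return p * ((alpha - 1.0) / w) ** (1.0 / alpha)

    def total_cost(p: float, w: float, t: float, alpha: float = 3.0) -> float:
        return w * t + (p / t) ** alpha * t

    if __name__ == "__main__":
        p, w = 4.0, 2.0
        t_star = optimal_duration(p, w)          # 4.0 for these numbers
        print(t_star, total_cost(p, w, t_star))  # minimum cost 12.0
        # Nearby durations are more expensive, as expected for the minimizer.
        print(total_cost(p, w, 0.9 * t_star), total_cost(p, w, 1.1 * t_star))
    ```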

    On the Hitting Set of Bundles Problem

    The minimal hitting set of bundles problem (HSB) is defined as follows. We are given a set $E = \{e_1, e_2, \ldots, e_n\}$ of $n$ elements. Each element $e_i$ ($i = 1, \ldots, n$) has a non-negative cost $c_i$. A bundle $b$ is a subset of $E$. We are also given a collection $S = \{S_1, S_2, \ldots, S_m\}$ of $m$ sets of bundles. More precisely, each set $S_j$ ($j = 1, \ldots, m$) consists of $g(j)$ distinct bundles denoted $b_j^1, b_j^2, \ldots, b_j^{g(j)}$. A solution of the HSB problem is a subset $E' \subseteq E$ such that for every $S_j \in S$ at least one bundle is covered, i.e. $b_j^l \subseteq E'$ for some $l$. The total cost of the solution, denoted $C(E')$, is $\sum_{i : e_i \in E'} c_i$. The problem is to find a solution of minimum total cost. We give a deterministic $N(1 - (1 - 1/N)^M)$-approximation algorithm, where $N$ is the maximum number of bundles per set and $M$ is the maximum number of sets an element belongs to. The approximation ratio is essentially the best achievable, since HSB cannot be approximated within a ratio of $7/6 - \varepsilon$ when $N = 2$ and $N - 1 - \varepsilon$ when $N \ge 3$. The proposed algorithm is also the first with a performance guarantee for the classical multiple-query optimization problem [9, 10]. Its approximation ratio for the MIN k-SAT problem, of which HSB is a generalization, matches that of the best known algorithm [3].
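
    To make the definition concrete, here is a brute-force sketch of the HSB objective in the notation above (element costs, bundles, sets of bundles). It is exponential in $|E|$ and only illustrates feasibility and cost; it is not the approximation algorithm described in the abstract.

    ```python
    from itertools import combinations

    # Brute-force illustration of hitting set of bundles (HSB): pick a minimum-cost
    # subset E' of elements such that every set S_j contains at least one bundle
    # fully included in E'. Exponential in |E|; for illustration only.

    def hsb_brute_force(costs, sets_of_bundles):
        """costs: dict element -> non-negative cost.
        sets_of_bundles: list of sets S_j, each a list of bundles (frozensets of elements)."""
        elements = list(costs)
        best_cost, best_subset = float("inf"), None
        for r in range(len(elements) + 1):
            for subset in combinations(elements, r):
                chosen = set(subset)
                if all(any(b <= chosen for b in s_j) for s_j in sets_of_bundles):
                    cost = sum(costs[e] for e in chosen)
                    if cost < best_cost:
                        best_cost, best_subset = cost, chosen
        return best_cost, best_subset

    if __name__ == "__main__":
        costs = {"e1": 1, "e2": 2, "e3": 4}
        S = [
            [frozenset({"e1", "e2"}), frozenset({"e3"})],  # S_1
            [frozenset({"e2"}), frozenset({"e3"})],        # S_2
        ]
        print(hsb_brute_force(costs, S))  # (3, {'e1', 'e2'}) covers both sets
    ```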

    Optimal data placement on networks with a constant number of clients

    We introduce optimal algorithms for the problems of data placement (DP) and page placement (PP) in networks with a constant number of clients, each of which has limited storage availability and issues requests for data objects. The objective for both problems is to utilize each client's storage efficiently (deciding where to place replicas of objects) so that the total access and installation cost incurred over all clients is minimized. In the PP problem an extra constraint on the maximum number of clients served by a single client must be satisfied. Our algorithms solve both problems optimally when all objects have uniform lengths. When object lengths are non-uniform we also find the optimal solution, albeit with a small, asymptotically tight violation of each client's storage size by εl_max, where l_max is the maximum length of the objects and ε is an arbitrarily small positive constant. We make no assumption on the underlying topology of the network (metric, ultrametric, etc.), thus obtaining the first non-trivial results for non-metric data placement problems.
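
    The toy sketch below illustrates one plausible reading of the data placement objective with uniform object lengths: each client stores replicas within its capacity, pays an installation cost per replica, and serves each request from the cheapest holder or from an origin server. The cost model, identifiers, and numbers are assumptions made for illustration; this is not the optimal algorithm of the paper.

    ```python
    from itertools import combinations, product

    # Toy brute force over replica placements (uniform object lengths assumed,
    # so capacity is simply a replica count). Illustration only.

    def data_placement_brute_force(capacity, install, demand, access, origin):
        """capacity[c]: number of replicas client c can hold.
        install[c][o]: cost of placing object o at client c.
        demand[c][o]: number of requests client c issues for object o.
        access[c][c2]: per-request cost for c to fetch from client c2.
        origin[o]: per-request cost of fetching o when no client holds it."""
        clients, objects = list(capacity), list(origin)
        options = {c: [frozenset(s)
                       for r in range(capacity[c] + 1)
                       for s in combinations(objects, r)]
                   for c in clients}
        best_cost, best_placement = float("inf"), None
        for choice in product(*(options[c] for c in clients)):
            placement = dict(zip(clients, choice))
            cost = sum(install[c][o] for c in clients for o in placement[c])
            for c in clients:
                for o, requests in demand[c].items():
                    holders = [access[c][c2] for c2 in clients if o in placement[c2]]
                    cost += requests * min(holders + [origin[o]])
            if cost < best_cost:
                best_cost, best_placement = cost, placement
        return best_cost, best_placement

    if __name__ == "__main__":
        capacity = {"c1": 1, "c2": 1}
        install = {"c1": {"o1": 2, "o2": 2}, "c2": {"o1": 2, "o2": 2}}
        demand = {"c1": {"o1": 3}, "c2": {"o1": 1, "o2": 2}}
        access = {"c1": {"c1": 0, "c2": 1}, "c2": {"c1": 1, "c2": 0}}
        origin = {"o1": 5, "o2": 5}
        print(data_placement_brute_force(capacity, install, demand, access, origin))
    ```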

    On the parallel complexity of the alternating Hamiltonian cycle problem

    Given a graph with colored edges, a Hamiltonian cycle is called alternating if its successive edges differ in color. The problem of finding such a cycle, even for 2-edge-colored graphs, is trivially NP-complete, while it is known to be polynomial for 2-edge-colored complete graphs. In this paper we study the parallel complexity of finding such a cycle, if any, in 2-edge-colored complete graphs. We give a new characterization of the 2-edge-colored complete graphs admitting an alternating Hamiltonian cycle, which allows us to derive a parallel algorithm for the problem. Our parallel solution uses a perfect matching algorithm, placing the alternating Hamiltonian cycle problem in the class RNC. In addition, a sequential version of our parallel algorithm improves the running time of the fastest known sequential algorithm for the alternating Hamiltonian cycle problem by a factor of $O(\sqrt{n})$.
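
    As a small illustration of the definition above (not of the RNC algorithm), the following checks whether a candidate Hamiltonian cycle in an edge-colored graph is alternating; the vertex and color labels are chosen for illustration.

    ```python
    # Checks the definition only: consecutive edges along the cycle, including the
    # wrap-around pair, must differ in color, and every vertex is visited exactly once.

    def is_alternating_hamiltonian_cycle(cycle, edge_color, n):
        """cycle: list of vertices in visiting order (each exactly once).
        edge_color: dict mapping frozenset({u, v}) -> color (e.g. 0 or 1).
        n: number of vertices in the graph."""
        if len(cycle) != n or len(set(cycle)) != n:
            return False  # must visit every vertex exactly once
        colors = []
        for i in range(n):
            u, v = cycle[i], cycle[(i + 1) % n]
            edge = frozenset((u, v))
            if edge not in edge_color:
                return False  # edge missing from the graph
            colors.append(edge_color[edge])
        return all(colors[i] != colors[(i + 1) % n] for i in range(n))

    if __name__ == "__main__":
        # A 2-edge-coloring of the complete graph on 4 vertices (colors 0/1).
        edge_color = {
            frozenset((0, 1)): 0, frozenset((1, 2)): 1, frozenset((2, 3)): 0,
            frozenset((3, 0)): 1, frozenset((0, 2)): 1, frozenset((1, 3)): 0,
        }
        print(is_alternating_hamiltonian_cycle([0, 1, 2, 3], edge_color, 4))  # True
    ```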